Train Models and Track Experiments with Modelbit and Weights & Biases
We're thrilled to announce a new integration with Modelbit. Here's what you need to know:
Introduction
Modelbit and Weights & Biases are excited to announce a new integration for data scientists and machine learning engineers. It lets ML practitioners train and deploy their models in Modelbit while logging and visualizing training progress in Weights & Biases.

Why Modelbit and W&B fit so well together
Deploying ML models into production has typically been perceived as a tedious and intimidating task. Modelbit was created to make deploying ML models into production as simple as calling modelbit.deploy().
While simplifying model deployment is a step in the right direction, it’s equally critical that ML teams are set up to successfully track experiments when training models. This is exactly where a platform like Weights & Biases comes in. Traditionally, teams will train models in something like a Jupyter notebook. In recent years, many of those teams are now logging training data to Weights & Biases so they can track their experiments. However, once the ML team is happy with the model’s performance, the issue of deploying the model into production comes right back into focus.
What if we could train our models in the same platform that we use to deploy them? That’s a question we heard from our customers, and it led to the release of Modelbit’s training jobs. When you train ML models in Modelbit, they are instantly available to call via REST API once you’re ready to serve them in production.
Of course, this doesn’t eliminate the need to track your model’s training experiments, which is why we are very excited to announce that Modelbit now seamlessly integrates with Weights & Biases to help you log model training progress directly to your W&B projects.
In this tutorial, we’ll demonstrate how to integrate Weights & Biases with Modelbit. To show the full power of the integration, we’ll train a neural net for binary classification using PyTorch: we’ll run the training in Modelbit, track our experiments in Weights & Biases, and finish by deploying to production in Modelbit.
Let’s begin!
1: Setting up Modelbit
Using Modelbit, you can deploy any ML model directly from your Python notebook (or git) to Snowflake, Redshift, and REST.
To get started:
Install Modelbit
First, install the Modelbit package via pip:
pip install modelbit
Log into Modelbit
import modelbit
mb = modelbit.login()
That's it! Now, we can start pushing our models to deployment.
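For a sense of how lightweight deployment is, here is a minimal sketch of deploying a plain Python function with mb.deploy(). The example_predict function below is a hypothetical stand-in for real model inference:

def example_predict(a: float, b: float) -> int:
    # hypothetical placeholder for real model inference logic
    return int(a > b)

# Modelbit packages the function and its dependencies for serving
mb.deploy(example_predict)

Later in this tutorial, we'll use the same workflow with a real PyTorch model.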
2: Setting up Weights & Biases
As discussed above, Weights & Biases makes it easy to log experiments and visualize results directly from your dashboard.
To get started:
Install W&B
First, install the CLI and Python library for interacting with the Weights & Biases API:
pip install wandb
Log into Weights & Biases
To log your experiments using Weights & Biases, create your account here. This will give you an API key. Next, log in and paste your API key when prompted.
wandb login
Import Weights & Biases
Lastly, import the wandb library in your notebook so you can log your training runs:
import wandb
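Alternatively, you can authenticate from Python rather than the CLI. wandb.login() will prompt for your API key if it isn't already configured (for example, via the WANDB_API_KEY environment variable):

import wandb
wandb.login()  # prompts for the API key if not already configured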
3: Integrate Modelbit with Weights & Biases
To seamlessly deploy your machine learning models with Modelbit and automatically register logging jobs with Weights & Biases, you first need to connect the two platforms. The steps:
Grab your Weights & Biases API Key from the Weights & Biases dashboard:

Provide that API key to Modelbit
Next, log in to the Modelbit dashboard and navigate to Integrations under Settings.

And we're done! Now we can proceed with training a model, deploying it, and monitoring it with Weights & Biases.
Tutorial: Model Development, Deployment and Logging
For this tutorial, we’ll train a neural network for binary classification using PyTorch. We’ll deploy and train it in Modelbit, and log its training process in Weights & Biases.
Dataset
For this demonstration, we’ll consider a dummy 2D spiral dataset shown below:

You can download that data here. However, you are free to proceed with any multidimensional regression/classification data you may have.
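If you'd rather synthesize the data than download it, here is a minimal sketch that generates a comparable two-class 2D spiral and produces the X (features) and y (labels) tensors the training code below expects. The exact spiral parameters are illustrative:

import numpy as np
import torch

def make_spiral(points_per_class=500, classes=2, noise=0.2, seed=42):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for c in range(classes):
        r = np.linspace(0.0, 1.0, points_per_class)              # radius grows outward
        t = np.linspace(c * np.pi, c * np.pi + 2 * np.pi, points_per_class)
        t = t + rng.normal(scale=noise, size=points_per_class)   # angular noise
        X.append(np.stack([r * np.sin(t), r * np.cos(t)], axis=1))
        y.append(np.full(points_per_class, c))
    X = torch.tensor(np.concatenate(X), dtype=torch.float32)
    y = torch.tensor(np.concatenate(y), dtype=torch.long)
    return X, y

X, y = make_spiral()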
Model
Next, let's define our PyTorch model class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralNetwork(nn.Module):
    def __init__(self, hidden_size, classes=2):
        super().__init__()
        self.fc1 = nn.Linear(2, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, classes)

    def forward(self, x):
        ## Forward pass
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.softmax(self.fc3(x), dim=1)
        return x

    def accuracy(self, outputs, labels):
        ## Compute the fraction of correct predictions
        return int(torch.sum(torch.argmax(outputs, axis=1) == labels)) / len(outputs)
Here, we develop a neural network with two hidden layers. It takes 2D data as input and outputs the corresponding softmax probabilities.
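As a quick, illustrative sanity check, we can push a batch of random 2D points through the network and confirm that each output row is a valid probability distribution over the two classes:

model = NeuralNetwork(hidden_size=400)
dummy = torch.randn(8, 2)   # a batch of 8 two-dimensional points
probs = model(dummy)
print(probs.shape)          # torch.Size([8, 2])
print(probs.sum(dim=1))     # each row sums to ~1 (softmax output)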

Deployment and logging
Last, let's move to training, deployment and logging. The first thing we want to do here is define some hyperparameters in the global scope:
HIDDEN_SIZE = 400   ## Number of neurons in each hidden layer
TOTAL_EPOCHS = 300  ## Number of epochs
LR = 0.005          ## Learning rate
Next, we define a W&B training and logging method. This will be responsible for local model training and logging it simultaneously in W&B. Let’s call it wandb_training.
Here, we initialize a logging job in Weights & Biases using wandb.init(), instantiate the model object, define the optimizer and loss, and run the training loop.
In every epoch, we log the training metrics using the wandb.log() method; specifically, we track the loss and accuracy of the model.
def wandb_training():
    # initialize a W&B run (assumes X and y, the spiral features and labels,
    # are already defined in the global scope)
    wandb.init(project="Modelbit With W&B",
               config={"learning_rate": LR,
                       "architecture": "Neural Network",
                       "dataset": "Spiral",
                       "total_epochs": TOTAL_EPOCHS,
                       "hidden_size": HIDDEN_SIZE})

    # initialize the model
    model = NeuralNetwork(HIDDEN_SIZE, classes=2)

    # define the loss function
    criterion = nn.CrossEntropyLoss()

    # define the optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)

    # train
    for epoch in range(TOTAL_EPOCHS):
        outputs = model(X)
        loss = criterion(outputs, y)

        # backward pass and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # compute accuracy
        acc = model.accuracy(outputs, y)

        # log model training results in W&B
        wandb.log({"acc": acc, "loss": loss.item()})

    # finish logging once trained
    wandb.finish()

    # return the trained model
    return model
We recommend executing the training locally before sending it off to Modelbit; this helps catch any errors in the code or network layers.
wandb_training()
Once we are satisfied, we deploy our training job to Modelbit like so:
model = mb.add_job(wandb_training, deployment_name="Modelbit_With_WB")
And we're done! Let's run this cell:

As shown, Modelbit uploads the dependencies and the data, followed by a success message indicating that the training job will be ready soon. Navigating to Modelbit’s dashboard shows us the deployed training job:

Next, we run the job and wait for Modelbit to train it.

Once the training is over, some training logs are available under “Training Jobs."

Moreover, the training job also shows up in the W&B dashboard under the project name specified in the wandb.init() call: “Modelbit With W&B." There, we see the accuracy and loss curves logged via the wandb.log(...) call. For our project here, they look like this:
Conclusion
The new integration allows ML teams to train their models in Modelbit while sending logs to Weights & Biases for experiment tracking and fine-tuning. Once you’re ready to deploy into production, your ML model in Modelbit will be available to call as a REST API.
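As an illustrative sketch, calling a deployed model over REST might look like the snippet below. The workspace name, deployment name, and payload shape here are hypothetical; copy the exact endpoint URL and request format from your Modelbit dashboard:

import requests

# hypothetical endpoint; use the real URL from your Modelbit dashboard
resp = requests.post(
    "https://YOUR_WORKSPACE.app.modelbit.com/v1/YOUR_DEPLOYMENT/latest",
    json={"data": [[0.5, -0.2]]},  # one 2D point to classify
)
print(resp.json())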
Try out the new integration for free today (both Modelbit and Weights & Biases have free plans) and let us know what you think!